Scientists warn of the dangers of artificial intelligence, but are not unanimous on the solutions

Computer scientists who helped build the foundation of today’s AI technology warn of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.

After retiring from Google to speak more freely, Geoffrey Hinton, the so-called godfather of artificial intelligence, plans to air his concerns Wednesday at a conference at the Massachusetts Institute of Technology. He has already voiced regrets about his work and doubts about humanity’s survival if machines become smarter than humans.

AI pioneer Yoshua Bengio, co-winner with Hinton of computer science’s top prize, told The Associated Press on Wednesday that he is “pretty much in line” with Hinton’s concerns about chatbots such as ChatGPT and related technology, but worries that simply saying “We are doomed” does not help.

“I would say the biggest difference is that he’s kind of a pessimistic person, and I’m more on the optimistic side,” said Bengio, a professor at the University of Montreal. “I think the dangers — short-term and long-term dangers — are very serious and need to be taken seriously not just by a few scientists but also by governments and the general public.”

There are many signs that governments are listening. The White House has invited the CEOs of Google, Microsoft and ChatGPT maker OpenAI to meet with Vice President Kamala Harris on Thursday for what officials describe as a frank discussion of how to mitigate both the short- and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.

But all the talk of dire future dangers has some worried that the hype about superhuman machines — which don’t exist yet — is interfering with efforts to put practical safeguards on existing AI products, which are largely unregulated.

Margaret Mitchell, former head of Google’s artificial intelligence ethics team, said she was shocked that Hinton did not speak out during his tenure at Google, especially after the 2020 ouster of Timnit Gebru, a prominent Black scientist who studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.

“It’s a privilege to get away from the reality of rampant discrimination, hate speech, toxicity, nonconsensual pornography of women, all these things that are actively harming people who are marginalized in tech,” said Mitchell, who was also forced out of Google after Gebru’s departure. “He’s skipping over all of those things to worry about something more distant.”

Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook’s parent company Meta, received the Turing Award in 2019 for their breakthroughs in artificial neural networks, which helped develop today’s AI applications such as ChatGPT.

Bengio, the only one of the three who did not take a job with a tech giant, has for years expressed concerns about the near-term risks of artificial intelligence, including job market instability, automated weaponry and the dangers of biased data sets.

But those concerns have grown recently, with Bengio joining other computer scientists and tech executives such as Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause in the development of AI systems more powerful than OpenAI’s latest model, GPT-4.

Bengio said Wednesday that he believes the latest AI models already pass the “Turing test,” named after a method introduced in 1950 by British codebreaker and AI pioneer Alan Turing to measure when AI becomes indistinguishable from humans — at least on the surface.

“It’s a milestone that could have drastic consequences if we’re not careful,” Bengio said. “My primary concern is how they can be used for nefarious purposes: to subvert democracies, for cyberattacks and for disinformation. You can talk to these systems and think you’re interacting with a human. They’re hard to spot.”

Researchers disagree on whether and how current AI language systems—which have many limitations, including a tendency to fabricate information—might actually become smarter than humans.

Aidan Gomez was one of the authors of a groundbreaking 2017 paper that introduced the so-called transformer technique—the “T” at the end of ChatGPT—which improved the performance of machine learning systems, particularly in how they learn from passages of text. Then just a 20-year-old Google intern, Gomez remembers lying on a couch at the company’s California headquarters when his team sent out the paper around 3 a.m.

“Aidan, this is going to be so huge,” he remembers a colleague telling him about the work, which has since helped lead to new systems that can generate humanlike prose and imagery.

Six years later, and now the CEO of his own AI company, Cohere, Gomez is excited about the potential applications of these systems but troubled by fearmongering that he says is “disconnected” from their true capabilities and “relies on extraordinary imagination and reasoning.”

“The idea that these models are somehow going to get access to our nuclear weapons and trigger some kind of extinction event is not a productive conversation,” Gomez said. “It’s counterproductive to those real, pragmatic policy efforts that are trying to do something good.”
